Neural Networks with Few Multiplications

Authors

  • Zhouhan Lin
  • Matthieu Courbariaux
  • Roland Memisevic
  • Yoshua Bengio
Abstract

For most deep learning algorithms, training is notoriously time-consuming. Since most of the computation in training neural networks is typically spent on floating point multiplications, we investigate an approach to training that eliminates the need for most of these. Our method consists of two parts: first, we stochastically binarize weights to convert the multiplications involved in computing hidden states into sign changes. Second, while back-propagating error derivatives, in addition to binarizing the weights, we quantize the representations at each layer to convert the remaining multiplications into binary shifts. Experimental results across three popular datasets (MNIST, CIFAR10, SVHN) show that this approach not only does not hurt classification performance but can result in even better performance than standard stochastic gradient descent training, paving the way to fast, hardware-friendly training of neural networks.
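
The two transformations described in the abstract can be sketched in a few lines. The NumPy snippet below is a minimal illustration, not the paper's exact scheme: the hard-sigmoid sampling probability for the binary weights and the unclipped power-of-two rounding of the representations are assumed choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_binarize(W):
    """Sample binary weights in {-1, +1}.

    Illustrative choice: P(+1) = clip((w + 1) / 2, 0, 1), so weights near
    +1 are almost always +1 and weights near -1 almost always -1. With
    binary weights, hidden pre-activations need only sign changes and
    additions, not floating point multiplications.
    """
    p = np.clip((W + 1.0) / 2.0, 0.0, 1.0)
    return np.where(rng.random(W.shape) < p, 1.0, -1.0)

def quantize_pow2(x):
    """Round magnitudes to the nearest power of two (sign preserved).

    Multiplying by such a value amounts to a binary shift in fixed-point
    hardware; exponent clipping is omitted to keep the sketch short.
    """
    mag = np.abs(x)
    exponent = np.round(np.log2(np.maximum(mag, 1e-12)))
    return np.where(mag == 0, 0.0, np.sign(x) * 2.0 ** exponent)

# Tiny usage example with made-up shapes.
x = rng.standard_normal((4, 8))               # batch of 4 inputs
W = rng.uniform(-1.0, 1.0, size=(8, 16))      # real-valued weights
h = np.tanh(x @ stochastic_binarize(W))       # forward pass with binary weights
h_q = quantize_pow2(h)                        # representation used during backprop
```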

Similar articles

Comparison Study on Neural Networks in Damage Detection of Steel Truss Bridge

This paper presents the application of three main Artificial Neural Networks (ANNs) in damage detection of steel bridges. The method can indicate damage in structural elements due to a localized change of stiffness called the damage zone. Changes in the structural response are used to identify the states of structural damage. To circumvent the difficulty arising from the non-linear n...

Low precision storage for deep learning

Multipliers are the most space- and power-hungry arithmetic operators in digital implementations of deep neural networks. We train a set of state-of-the-art neural networks (Maxout networks) on three benchmark datasets: MNIST, CIFAR-10 and SVHN. They are trained with three distinct formats: floating point, fixed point and dynamic fixed point. For each of those datasets and for each of those f...
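
As a rough sketch of what the fixed point and dynamic fixed point formats look like (the bit widths below are assumed for illustration, not the ones used in the cited work):

```python
import numpy as np

def to_fixed_point(x, int_bits=4, frac_bits=11):
    """Quantize to a signed fixed-point grid with step 2**-frac_bits.

    Values outside the representable range saturate. The split between
    integer and fractional bits is an assumed example.
    """
    scale = 2.0 ** frac_bits
    max_val = 2.0 ** int_bits - 1.0 / scale
    q = np.round(x * scale) / scale
    return np.clip(q, -(2.0 ** int_bits), max_val)

def to_dynamic_fixed_point(x, total_bits=16):
    """Dynamic fixed point: the same grid, but the number of fractional
    bits is chosen per tensor from its current magnitude range."""
    max_abs = float(np.max(np.abs(x))) + 1e-12
    frac_bits = total_bits - 1 - int(np.ceil(np.log2(max_abs)))
    scale = 2.0 ** frac_bits
    limit = 2.0 ** (total_bits - 1) - 1
    return np.clip(np.round(x * scale), -limit, limit) / scale

# Example: quantize a random weight tensor with both formats.
W = np.random.randn(3, 3)
print(to_fixed_point(W))
print(to_dynamic_fixed_point(W))
```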

StrassenNets: Deep learning with a multiplication budget

A large fraction of the arithmetic operations required to evaluate deep neural networks (DNNs) are due to matrix multiplications, both in convolutional and fully connected layers. Matrix multiplications can be cast as 2-layer sum-product networks (SPNs) (arithmetic circuits), disentangling multiplications and additions. We leverage this observation for end-to-end learning of low-cost (in terms ...
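
To make the sum-product-network view concrete, Strassen's classic 2x2 decomposition can be written in exactly this two-layer form. The ternary matrices below are the hand-derived Strassen coefficients, not the learned ones from the cited paper:

```python
import numpy as np

# vec(C) = Wc @ ((Wa @ vec(A)) * (Wb @ vec(B)))
# Only the elementwise product in the middle uses multiplications (7 of them);
# Wa, Wb, Wc contain only {-1, 0, 1}, so applying them costs additions alone.
Wa = np.array([
    [1, 0, 0, 1],    # a11 + a22
    [0, 0, 1, 1],    # a21 + a22
    [1, 0, 0, 0],    # a11
    [0, 0, 0, 1],    # a22
    [1, 1, 0, 0],    # a11 + a12
    [-1, 0, 1, 0],   # a21 - a11
    [0, 1, 0, -1],   # a12 - a22
])
Wb = np.array([
    [1, 0, 0, 1],    # b11 + b22
    [1, 0, 0, 0],    # b11
    [0, 1, 0, -1],   # b12 - b22
    [-1, 0, 1, 0],   # b21 - b11
    [0, 0, 0, 1],    # b22
    [1, 1, 0, 0],    # b11 + b12
    [0, 0, 1, 1],    # b21 + b22
])
Wc = np.array([
    [1, 0, 0, 1, -1, 0, 1],   # c11
    [0, 0, 1, 0, 1, 0, 0],    # c12
    [0, 1, 0, 1, 0, 0, 0],    # c21
    [1, -1, 1, 0, 0, 1, 0],   # c22
])

A = np.random.randn(2, 2)
B = np.random.randn(2, 2)
m = (Wa @ A.ravel()) * (Wb @ B.ravel())   # the 7 multiplications
C = (Wc @ m).reshape(2, 2)
assert np.allclose(C, A @ B)
```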

Using Neural Networks with Limited Data to Estimate Manufacturing Cost

Neural networks were used to estimate the cost of jet engine components, specifically shafts and cases. The neural network process was compared with results produced by the current conventional cost estimation software and linear regression methods. Due to the complex nature of the parts and the limited amount of information available, data expansion techniques such as doubling-data and data-cr...

Sequence Modeling with Recurrent Tensor Networks

We introduce the recurrent tensor network, a recurrent neural network model that replaces the matrix-vector multiplications of a standard recurrent neural network with bilinear tensor products. We compare its performance against networks that employ long short-term memory (LSTM) units. Our results demonstrate that using tensors to capture the interactions between network inputs and history c...
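
A minimal sketch of a bilinear tensor recurrence is shown below; the parameterization and nonlinearity are assumptions, and the cited model's exact form may differ:

```python
import numpy as np

def rtn_step(x_t, h_prev, T, b):
    """One step of a bilinear-tensor recurrence (illustrative sketch).

    Each hidden unit k is driven by a bilinear form x_t^T T[k] h_prev,
    so input and history interact multiplicatively instead of only
    additively as in a plain RNN.
    """
    pre = np.einsum('i,kij,j->k', x_t, T, h_prev) + b
    return np.tanh(pre)

# Tiny usage example with made-up sizes.
rng = np.random.default_rng(0)
n_in, n_hid = 5, 8
T = rng.standard_normal((n_hid, n_in, n_hid)) * 0.1
b = np.zeros(n_hid)
h = np.zeros(n_hid)
for x_t in rng.standard_normal((3, n_in)):    # a length-3 input sequence
    h = rtn_step(x_t, h, T, b)
```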

Journal:
  • CoRR

Volume: abs/1510.03009  Issue:

Pages: -

Publication date: 2015